With its continuously growing popularity around the world, fitness activity analysis has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing demand for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instructions containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements, 2) detailed and professional language instructions that describe how to perform a specific activity, 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D offers great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.
As the COVID-19 pandemic puts pressure on healthcare systems worldwide, computed tomography based AI diagnostic systems have become a sustainable solution for early diagnosis. However, their model-wise vulnerability to adversarial perturbations hinders deployment in practical situations. Existing adversarial training strategies are difficult to generalize to the medical imaging field, which is challenged by complex medical texture features. To overcome this challenge, we propose a Contour Attention Preserving (CAP) method based on lung cavity edge extraction. The contour prior features are injected into the attention layer via a parameter regularization, and we optimize the robust empirical risk with a hybrid distance metric. We then introduce a new cross-nation CT scan dataset to evaluate the generalization capability of adversarial robustness under distribution shift. Experimental results indicate that the proposed method achieves state-of-the-art performance in multiple adversarial defense and generalization tasks. The code and dataset are available at https://github.com/Quinn777/CAP.
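A minimal sketch of what such contour-aware adversarial training might look like, assuming a model that exposes its attention map and a precomputed lung-contour prior; the PGD inner loop, the MSE regularizer on the attention output, and all hyperparameters are illustrative stand-ins for the paper's actual parameter regularization and hybrid distance metric:

```python
import torch
import torch.nn.functional as F

def cap_loss(model, x, y, contour_prior, epsilon=8/255, alpha=2/255, steps=5, lam=0.1):
    """Hypothetical sketch of Contour Attention Preserving training.

    `model(x)` is assumed to return (logits, attention_map); `contour_prior`
    is a lung-cavity edge map with the same spatial size as the attention map.
    """
    # PGD-style inner maximization to craft an adversarial example.
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits, _ = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)

    logits_adv, attn = model(x_adv.detach())
    # Robust empirical risk plus a term keeping attention near the contour prior.
    robust = F.cross_entropy(logits_adv, y)
    contour_reg = F.mse_loss(attn, contour_prior)
    return robust + lam * contour_reg
```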
Medical vision-and-language pre-training provides a feasible solution for extracting effective visual and linguistic representations from medical images and texts. However, few studies have specifically examined this field to facilitate medical vision-and-language understanding. In this paper, we propose a self-supervised learning paradigm with multi-modal masked autoencoders (M$^3$AE), which learns cross-modal domain knowledge by reconstructing missing pixels and tokens from randomly masked images and texts. Three key designs make this simple approach work. First, considering the different information densities of vision and language, we adopt different masking ratios for the input images and texts, with a considerably larger masking ratio used for images. Second, we use visual and textual features from different layers to perform the reconstruction, handling the different levels of abstraction in vision and language. Third, we develop different designs for the vision and language decoders (i.e., a Transformer for vision and a multi-layer perceptron for language). To provide a comprehensive evaluation and facilitate further research, we construct a medical vision-and-language benchmark including three tasks. Experimental results demonstrate the effectiveness of our approach, which achieves state-of-the-art results on all downstream tasks. Furthermore, we conduct additional analysis to better verify the different components of the method and the various settings of pre-training. The source code is available at \url{https://github.com/zhjohnchan/M3AE}.
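The asymmetric masking can be sketched in a few lines, assuming MAE-style random masking over patch/token embeddings; the helper name and the exact ratios below are illustrative, since the abstract only states that the image ratio is larger:

```python
import torch

def random_mask(tokens: torch.Tensor, mask_ratio: float):
    """Randomly drop a fraction of tokens (image patches or word pieces)."""
    B, L, D = tokens.shape
    n_keep = int(L * (1.0 - mask_ratio))
    noise = torch.rand(B, L)                      # per-token random scores
    keep_idx = noise.argsort(dim=1)[:, :n_keep]   # lowest scores survive
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return kept, keep_idx

# Larger mask ratio for images (dense, redundant) than for text (sparse).
img_tokens = torch.randn(4, 196, 768)   # e.g. 14x14 ViT patches
txt_tokens = torch.randn(4, 40, 768)    # word-piece embeddings
img_kept, _ = random_mask(img_tokens, mask_ratio=0.75)
txt_kept, _ = random_mask(txt_tokens, mask_ratio=0.15)
```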
In this report, we propose a video-language pretraining (VLP) based solution for four Ego4D challenge tasks, including Natural Language Query (NLQ), Moment Query (MQ), Object State Change Classification (OSCC), and PNR Localization (PNR). In particular, we exploit the recently released Ego4D dataset \cite{grauman2021ego4d} to pioneer egocentric VLP from the perspectives of pretraining dataset, pretraining objective, and development set. Based on the above three designs, we develop a pretrained video-language model that can transfer its egocentric video-text representation, or video-only representation, to several video downstream tasks. Our egocentric VLP achieves 10.46 R@1&IoU@0.3 on NLQ, 10.33 mAP on MQ, 74% accuracy on OSCC, and 0.67 sec error on PNR. The code is available at https://github.com/showlab/egovlp.
In this report, we propose a video-language pretraining (VLP) based solution \cite{kevin2022egovlp} for the EPIC-KITCHENS-100 Multi-Instance Retrieval (MIR) challenge. In particular, we exploit the recently released Ego4D dataset \cite{grauman2021ego4d} to pioneer egocentric VLP from the perspectives of pretraining dataset, pretraining objective, and development set. Based on the above three designs, we develop a pretrained video-language model that can transfer its egocentric video-text representation to the MIR benchmark. Furthermore, we devise an adaptive multi-instance max-margin loss to effectively fine-tune the model, and equip it with a dual-softmax technique for reliable inference. Our best single model obtains strong performance on the challenge test set, with 47.39% mAP and 61.44% nDCG. The code is available at https://github.com/showlab/egovlp.
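The dual-softmax inference mentioned above is a known retrieval re-ranking trick and can be sketched compactly; the temperature value here is an assumption, not taken from the report:

```python
import torch

def dual_softmax_scores(sim: torch.Tensor, temperature: float = 100.0):
    """Dual-softmax re-ranking for video-text retrieval.

    `sim` is a [num_videos, num_texts] cosine-similarity matrix. Scores are
    softmax-normalized along both axes and multiplied, which suppresses
    queries that match many candidates equally well.
    """
    over_texts = torch.softmax(sim * temperature, dim=1)
    over_videos = torch.softmax(sim * temperature, dim=0)
    return over_texts * over_videos
```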
Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, leading to much time and resource consumption. Weakly supervised learning (WSL), which leverages coarse-grained labels to accomplish fine-grained tasks, has the potential to solve this problem. In this paper, we first propose a new large-scale tuberculosis (TB) chest X-ray dataset, namely the Tuberculosis chest X-ray Attribute dataset (TBX-Att), and then establish an attribute-assisted weakly-supervised framework to classify and localize TB by leveraging attribute information to overcome the insufficiency of supervision in the WSL scenario. Specifically, first, the TBX-Att dataset contains 2000 X-ray images with seven kinds of attributes for TB relational reasoning, annotated by experienced radiologists. It also includes the public TBX11K dataset with 11200 X-ray images to facilitate weakly supervised detection. Second, we exploit a multi-scale feature interaction model for TB area classification and detection with attribute relational reasoning. The proposed model is evaluated on the TBX-Att dataset and will serve as a solid baseline for future research. The code and data will be available at https://github.com/gangmingzhao/tb-attribute-weak-localization.
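One way attribute supervision could assist a weakly supervised TB classifier is sketched below: a shared backbone feature feeds both a seven-way multi-label attribute head and the TB classification head, with attribute evidence concatenated back as an auxiliary cue. The layer sizes, fusion scheme, and class names are hypothetical and not taken from the paper:

```python
import torch
import torch.nn as nn

class AttributeAssistedHead(nn.Module):
    """Hypothetical multi-task head: attribute supervision regularizes
    the weakly labeled TB classification task."""

    def __init__(self, feat_dim=2048, num_attributes=7):
        super().__init__()
        self.attr_head = nn.Linear(feat_dim, num_attributes)      # multi-label attributes
        self.cls_head = nn.Linear(feat_dim + num_attributes, 2)   # TB vs. normal

    def forward(self, feat):
        attr_logits = self.attr_head(feat)
        # Attribute evidence is fed back into the classifier as an auxiliary cue.
        cls_logits = self.cls_head(torch.cat([feat, attr_logits.sigmoid()], dim=1))
        return cls_logits, attr_logits
```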
In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous work in this direction, we employ a server-side approximation in a quasi-Newton method without requiring any training data from the clients. In this way, we not only shift the computational burden from clients to the server, but also eliminate the additional communication of second-order updates between clients and the server. We provide theoretical guarantees for the convergence of our new method, and empirically demonstrate its fast convergence and communication savings in both convex and non-convex settings.
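As a toy illustration of a server-side quasi-Newton step, the sketch below builds a scalar secant (Barzilai-Borwein) approximation of the inverse Hessian from successive aggregated pseudo-gradients, so no client data or extra second-order communication is needed; this is an assumed simplification, not FedSSO's actual update rule:

```python
import numpy as np

def server_quasi_newton_step(w, grad, prev_w, prev_grad, lr=1.0):
    """One server-side quasi-Newton-flavored step, sketched.

    `grad` is the aggregated pseudo-gradient from client updates; clients
    only communicate first-order information as in standard FL.
    """
    s = w - prev_w            # parameter displacement
    y = grad - prev_grad      # gradient displacement
    # Scalar secant approximation of the inverse Hessian.
    denom = float(np.dot(y, y))
    h = float(np.dot(s, y)) / denom if denom > 1e-12 else 1.0
    return w - lr * max(h, 1e-8) * grad
```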
Vision-Language Pre-Training (VLP) has shown promising capabilities to align image and text pairs, facilitating a broad variety of cross-modal learning tasks. However, we observe that VLP models often lack the visual grounding/localization capability which is critical for many downstream tasks such as visual reasoning. In this work, we propose a novel Position-guided Text Prompt (PTP) paradigm to enhance the visual grounding ability of cross-modal models trained with VLP. Specifically, in the VLP phase, PTP divides the image into $N\times N$ blocks, and identifies the objects in each block through the widely used object detector in VLP. It then reformulates the visual grounding task into a fill-in-the-blank problem given a PTP by encouraging the model to predict the objects in the given blocks or regress the blocks of a given object, e.g. filling ``P'' or ``O'' in a PTP ``The block P has a O''. This mechanism improves the visual grounding capability of VLP models and thus helps them better handle various downstream tasks. By introducing PTP into several state-of-the-art VLP frameworks, we observe consistently significant improvements across representative cross-modal learning model architectures and several benchmarks, e.g. zero-shot Flickr30K Retrieval (+4.8 in average recall@1) for the ViLT \cite{vilt} baseline, and COCO Captioning (+5.3 in CIDEr) for the SOTA BLIP \cite{blip} baseline. Moreover, PTP achieves comparable results with object-detector based methods, and much faster inference speed, since PTP discards its object detector for inference while the latter cannot. Our code and pre-trained weights will be released at \url{https://github.com/sail-sg/ptp}.
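The block-and-fill mechanism can be illustrated with a short sketch. The helper below bins detected object centers into $N\times N$ blocks and verbalizes them as position-guided prompts; the function name, input format, and block numbering scheme are assumptions, since the abstract only quotes the prompt template itself:

```python
def make_ptp_prompts(detections, image_size, n=3):
    """Generate Position-guided Text Prompts from detector output.

    `detections` is a list of (label, cx, cy) object centers in pixels.
    During pre-training, either the block id P or the object name O would
    be blanked out for the model to fill in.
    """
    w, h = image_size
    prompts = []
    for label, cx, cy in detections:
        col = min(int(cx / w * n), n - 1)
        row = min(int(cy / h * n), n - 1)
        block_id = row * n + col
        prompts.append(f"The block {block_id} has a {label}")
    return prompts

print(make_ptp_prompts([("dog", 50, 300), ("ball", 400, 420)], (448, 448)))
# ['The block 6 has a dog', 'The block 8 has a ball']
```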
Word alignment aims to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise the neural word alignment training. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors to each other in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on the benchmarks of various language pairs show that our approach can, surprisingly, self-correct the third-party supervision by finding more accurate word alignments and deleting wrong word alignments, leading to better performance than various third-party word aligners, including the currently best one. When we integrate all supervisions from various third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates on average more than two points lower than those of the best third-party aligner. We release our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
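As a rough illustration of the supervision signal described above, the following sketch pulls the contextualized embeddings of externally aligned word pairs toward each other; the function name and the plain L2 attraction are assumptions, and the paper's actual objective may be more elaborate:

```python
import torch
import torch.nn.functional as F

def alignment_supervision_loss(src_emb, tgt_emb, aligned_pairs):
    """Pull embeddings of third-party-aligned word pairs together.

    `src_emb` and `tgt_emb` are [len, dim] contextualized embeddings from
    the cross-lingual LM being fine-tuned; `aligned_pairs` lists (i, j)
    index pairs produced by the external aligner.
    """
    src_idx = torch.tensor([i for i, _ in aligned_pairs])
    tgt_idx = torch.tensor([j for _, j in aligned_pairs])
    return F.mse_loss(src_emb[src_idx], tgt_emb[tgt_idx])
```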
Convolution-based methods provide good segmentation performance in medical image segmentation tasks. However, these methods face the following challenges when dealing with the edges of medical images: (1) previous convolution-based methods do not attend to the boundary relationship between foreground and background around the segmentation edge, leading to degraded segmentation performance when the edge varies; (2) the inductive bias of convolutional layers cannot adapt to complex edge variations and the aggregation of multiple segmented areas, so that performance improvements are mostly limited to segmenting the bodies of segmented areas rather than their edges. To address these challenges, we propose the CM-MLP framework, built on an MFI (Multi-scale Feature Interaction) block and an ACRE (Axial Context Relation Encoder) block, to accurately segment the edges of medical images. In the MFI block, we propose a cascade multi-scale MLP (Cascade MLP) to simultaneously process all local information from the deeper layers of the network, and utilize a cascade multi-scale mechanism to fuse the discrete local information gradually. The ACRE block is then used to make the deep supervision focus on exploring the boundary relationship between foreground and background to refine the edges of medical images. The segmentation accuracy (Dice) of our proposed CM-MLP framework reaches 96.96%, 96.76%, and 82.54% on three benchmark datasets: the CVC-ClinicDB dataset, the sub-Kvasir dataset, and our in-house dataset, respectively, surpassing state-of-the-art methods. The source code and trained models will be available at https://github.com/programmerhyy/cm-mlp.
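A sketch of what one cascade multi-scale fusion step could look like, using 1x1 convolutions as per-pixel MLPs: deeper features are upsampled and mixed into the next-shallower stage one scale at a time, so local information from all depths is aggregated gradually. Channel widths and the mixing block are assumptions, not the paper's exact MFI design:

```python
import torch
import torch.nn as nn

class CascadeMLP(nn.Module):
    """Sketch of a cascade multi-scale MLP fusion step (deepest stage first)."""

    def __init__(self, channels=(512, 256, 128)):
        super().__init__()
        # 1x1 convs act as per-pixel MLPs mixing concatenated scales.
        self.mixers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c_deep + c_shallow, c_shallow, 1),
                          nn.GELU(),
                          nn.Conv2d(c_shallow, c_shallow, 1))
            for c_deep, c_shallow in zip(channels[:-1], channels[1:])
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, feats):  # feats: deepest first, e.g. strides [1/16, 1/8, 1/4]
        fused = feats[0]
        for mixer, shallow in zip(self.mixers, feats[1:]):
            fused = mixer(torch.cat([self.up(fused), shallow], dim=1))
        return fused
```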